A fast and efficient Modal EM algorithm for Gaussian mixtures

Authors

Abstract

In the modal approach to clustering, clusters are defined as the local maxima of the underlying probability density function, where the latter can be estimated either nonparametrically or using finite mixture models. Thus, clusters are closely related to certain regions around the density modes, and every cluster corresponds to a bump of the density. The Modal Expectation-Maximization (MEM) algorithm is an iterative procedure that can identify the local maxima of a density function. In this contribution, we propose a fast and efficient MEM algorithm to be used when the density function is estimated through a finite mixture of Gaussian distributions with parsimonious component-covariance structures. After describing the procedure, we apply the proposed MEM algorithm on both simulated and real data examples, showing its high flexibility in several contexts.
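The MEM iteration the abstract refers to goes back to Li, Ray and Lindsay (2007): starting from a data point, one alternates a posterior-responsibility step with a closed-form ascent step, and the point climbs to a mode of the mixture density; points reaching the same mode form a cluster. A minimal sketch of that fixed-point iteration for a Gaussian mixture with known parameters (the function name and signature are our own, and the paper's parsimonious-covariance speed-ups are not reproduced here):

```python
import numpy as np
from scipy.stats import multivariate_normal

def modal_em(x0, weights, means, covs, tol=1e-8, max_iter=500):
    """Ascend from x0 to a local mode of a Gaussian-mixture density
    via the Modal EM fixed-point iteration (Li, Ray & Lindsay, 2007)."""
    x = np.asarray(x0, dtype=float)
    inv_covs = [np.linalg.inv(S) for S in covs]
    for _ in range(max_iter):
        # "E-step": posterior responsibility of each component at the current point
        dens = np.array([w * multivariate_normal.pdf(x, m, S)
                         for w, m, S in zip(weights, means, covs)])
        p = dens / dens.sum()
        # "M-step": closed-form maximiser of sum_j p_j * log phi(x; mu_j, Sigma_j),
        # i.e. solve (sum_j p_j Sigma_j^-1) x = sum_j p_j Sigma_j^-1 mu_j
        A = sum(pj * Sinv for pj, Sinv in zip(p, inv_covs))
        b = sum(pj * (Sinv @ m) for pj, Sinv, m in zip(p, inv_covs, means))
        x_new = np.linalg.solve(A, b)
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x
```

Running this from every observation and grouping points that converge to the same mode yields the modal clustering described above; each iteration increases the mixture density at the current point, which is what makes the procedure an EM-type ascent.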


Related articles

Asymptotic Convergence Rate of the EM Algorithm for Gaussian Mixtures

It is well known that the convergence rate of the expectation-maximization (EM) algorithm can be faster than that of conventional first-order iterative algorithms when the overlap in the given mixture is small, but this argument has not yet been proved mathematically. This article studies the problem asymptotically in the setting of Gaussian mixtures under the theoretical framework of Xu and Jo...


On Convergence Properties of the EM Algorithm for Gaussian Mixtures

We build up the mathematical connection between the "Expectation-Maximization" (EM) algorithm and gradient-based approaches for maximum likelihood learning of finite Gaussian mixtures. We show that the EM step in parameter space is obtained from the gradient via a projection matrix P, and we provide an explicit expression for the matrix. We then analyze the convergence of EM in terms of special ...


Efficient EM Training of Gaussian Mixtures with Missing Data

In data-mining applications, we are frequently faced with a large fraction of missing entries in the data matrix, which is problematic for most discriminant machine learning algorithms. A solution that we explore in this paper is the use of a generative model (a mixture of Gaussians) to compute the conditional expectation of the missing variables given the observed variables. Since training a G...


A Component-wise EM Algorithm for Mixtures

In some situations, the EM algorithm exhibits slow convergence. One possible reason is that standard procedures update the parameters simultaneously. In this paper we focus on finite mixture estimation. In this framework, we propose a component-wise EM, which updates the parameters sequentially. We give an interpretation of this procedure as a proximal point algorithm and use it to prove the co...


A Two-Round Variant of EM for Gaussian Mixtures

We show that, given data from a mixture of k well-separated spherical Gaussians in ℝⁿ, a simple two-round variant of EM will, with high probability, learn the centers of the Gaussians to near-optimal precision, if the dimension is high (n ≫ log k). We relate this to previous theoretical and empirical work on the EM algorithm.



Journal

Journal title: Statistical Analysis and Data Mining

Year: 2021

ISSN: 1932-1864, 1932-1872

DOI: https://doi.org/10.1002/sam.11527